Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not fire at every propagation cycle (as happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential, an intrinsic quality of the neuron related to its membrane electrical charge, reaches a specific value. When a neuron fires, it generates a signal that travels to other neurons, which in turn increase or decrease their potentials in accordance with this signal.
In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher, and then either firing or decaying over time. Various coding methods exist for interpreting the outgoing spike train as a real-value number, either relying on the frequency of spikes, or the timing between spikes, to encode information.
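As a minimal sketch of the dynamics described above, the widely used leaky integrate-and-fire model can be simulated in a few lines. The function name and parameter values here are illustrative, not drawn from the article:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Leaky integrate-and-fire sketch: dV/dt = (v_rest - V + I) / tau.

    Incoming current pushes the membrane potential up; it decays
    toward rest otherwise, and the neuron fires (then resets) when
    the potential reaches the threshold.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * ((v_rest - v) + i_in) / tau  # leaky integration step
        if v >= v_threshold:                   # membrane potential hits threshold
            spike_times.append(t)              # the neuron fires a spike
            v = v_reset                        # potential resets after firing
    return spike_times

# A constant suprathreshold drive yields a regular outgoing spike train.
spikes = simulate_lif([2.0] * 200)
```

Sub-threshold input (here, any constant current below 1.0) would decay toward rest and never produce a spike, which is the qualitative difference from a perceptron-style unit that emits an output on every cycle.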
The first scientific model of a spiking neuron was proposed by Alan Lloyd Hodgkin and Andrew Huxley in 1952. This model describes how action potentials are initiated and propagated. Spikes, however, are not generally transmitted directly between neurons; communication requires the exchange of chemical substances, called neurotransmitters, in the synaptic gap. The complexity and variability of biological neurons have given rise to a variety of neuron models, such as the integrate-and-fire model (1907), the FitzHugh–Nagumo model (1961–1962), and the Hindmarsh–Rose model (1984).
From the information-theoretic point of view, the problem is to propose a model that explains how information is encoded and decoded by a series of pulse trains, i.e., action potentials. Thus, one of the fundamental questions of neuroscience is to determine whether neurons communicate by a rate code or by a pulse code.[1]
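The two coding hypotheses can be contrasted with a toy decoder. This sketch is illustrative only: the function names are hypothetical, and the latency scheme shown (earlier first spike means a larger value) is just one of several temporal codes discussed in the literature:

```python
def rate_decode(spike_times, window):
    """Rate code: the firing frequency over the window encodes the value."""
    return len(spike_times) / window

def latency_decode(spike_times, window):
    """One simple temporal code: an earlier first spike encodes a larger value."""
    if not spike_times:
        return 0.0
    return 1.0 - spike_times[0] / window

train = [12, 30, 48, 66, 84]    # spike times in ms within a 100 ms window
rate_decode(train, 100.0)       # 0.05 spikes/ms, i.e. 50 Hz
latency_decode(train, 100.0)    # 0.88, set by the first spike at 12 ms
```

Under a rate code, only the spike count matters, so shuffling the spike times leaves the decoded value unchanged; under a temporal code, moving the first spike changes the decoded value even if the count stays the same.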
Early results with spiking neural models suggested that by using temporal coding, networks of spiking neurons may gain more computational power than traditional neural networks. It was also suggested that, under certain conditions, any multi-layer perceptron can be closely approximated by a network of spiking neurons.
This kind of neural network can in principle be used for information-processing applications in the same way as traditional artificial neural networks. However, due to their more realistic properties, they can also be used to study the operation of biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, the electrophysiological recordings of this circuit can be compared to the output of the corresponding spiking artificial neural network simulated on a computer, determining the plausibility of the starting hypothesis.
In practice, there is a major difference between the theoretical power of spiking neural networks and what has been demonstrated. They have proved useful in neuroscience, but not (yet) in engineering. To date, there have been no large scale spiking neural networks that solve computational tasks of the order and complexity of rate coded (second generation) neural networks. It is relatively easy to construct a spiking neural network model and observe its dynamics. It is much harder to develop a model with stable behavior that computes a specific function.
There is a diverse range of application software for simulating spiking neural networks. This software can be classified according to the purpose of the simulation.
Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware.